Rubrics for designing and evaluating online asynchronous discussions

Authors

  • Lana Penny
  • Elizabeth Murphy
Abstract

The purpose of the study reported on in this paper was to identify performance criteria and ratings in rubrics designed for the evaluation of learning in online asynchronous discussions (OADs) in post-secondary contexts. We analysed rubrics collected from Internet sources. Using purposive sampling, we reached saturation with the selection of 50 rubrics. Using keyword analysis and subsequent grouping of keywords into categories, we identified 153 performance criteria in 19 categories and 831 ratings in 40 categories. We subsequently identified four core categories as follows: cognitive (44.0%), mechanical (19.0%), procedural/managerial (18.29%) and interactive (17.17%). Another 1.52% of ratings and performance criteria were labelled vague and not assigned to any core category.

Introduction

Online asynchronous discussions (OADs) are a form of computer-mediated communication (CMC) increasingly used in post-secondary distance learning (Campus Computing International, 2000, p. 5). Asynchronous conferencing is ‘the second most commonly used capability for online education’, after email (Kearsley, 2000, p. 30), and has been referred to as ‘a powerful tool for group communication and cooperative learning that promotes a level of reflective interaction often lacking in a face-to-face, teacher-centred classroom’ (Rovai & Jordan, 2004, p. 2). Some research has uncovered evidence that participation in OADs can promote shared knowledge bases (Sherry, 2000), higher levels of thinking (Kanuka, 2005), reflective thinking and collaboration (Markel, 2001), problem solving (Cho & Jonassen, 2002), knowledge construction (Gunawardena, Lowe & Anderson, 1997), and critical thinking and cognitive presence (Garrison, Anderson & Archer, 2003).

Although OADs offer the potential for realisation of many benefits, they do not guarantee that these benefits will automatically be realised (Murphy, 2004a). Participants in text-based discussions may experience difficulty processing and interpreting information (Gunawardena et al, 1997; Henri, 1992). They may remain at a comparing and sharing stage of knowledge rather than embarking on a more interactive and collaborative discussion that could promote higher levels of learning and critical thinking skills (Kanuka, 2005; Kanuka & Anderson, 1998; Pawan, Paulus, Yalcin & Chang, 2003). Bullen (1998) found ‘limited empirical support ... for the claims made about the potential of computer conferencing to facilitate higher level thinking’ (p. 2).

One method of verifying what, if any, benefits are realised in an OAD is transcript analysis. Transcript analysis involves the unitising and categorising of conference messages and the analysis of the resultant patterns of communication (Kanuka & Anderson, 1998). However, Rourke, Anderson, Garrison and Archer (2001) have described it as ‘difficult, frustrating, and time-consuming’ (p. 2). They provide a fictional account of a faculty member attempting to use transcript analysis to measure her students’ achievements. She is beset by problems including technique, time constraints, reliability and ethical considerations.
The account illustrates that transcript analysis is a technique more suited to researchers than to instructors. As the use of OADs increases, instructors also need a method to evaluate their students’ engagement in processes such as critical thinking, problem solving or knowledge construction. One method that has received attention from instructors is the use of rubrics. Rubrics are evaluation tools that clarify what is important to evaluate (Moskal, 2000) and that ‘contain qualitative descriptions of performance criteria that work well within the process of formative evaluation’ (Tierney & Simon, 2004, p. 1). Edelstein and Edwards (2002) found that rubrics can provide ‘feedback regarding the effectiveness of a student’s participation in a threaded discussion and offer benchmarks against which to measure and document progress’ (¶ 13–14). Gilbert and Dabbagh (2005) found that rubrics ‘positively influenced meaningful discourse in asynchronous online discussions’ (¶ 16).

Two of the essential components of a rubric are the performance criteria and the definitions (or ratings) (Popham, 1997). Performance (Arter, 2000) or evaluative (Popham, 1997) criteria identify the specific elements, or dimensions, of the task taught and assessed by the rubric (Jonassen, Howland, Moore & Marra, 2003; Popham, 1997; Tierney & Simon, 2004) and provide ‘guidelines, rules, or principles by which student responses, products, or performances are judged’ (Arter & McTighe, 2001, p. 180). Ratings ‘describe the way that qualitative differences in students’ responses are to be judged’ (Popham, 1997, p. 1), highlighting the difference between a performance assessed as fair or poor and one assessed as good or excellent.

We uncovered no studies in our review of the literature that systematically identified the performance criteria or ratings assessed by rubrics for the evaluation of online discussions. The goal of the study reported on in this paper was to identify the range, type and percentage of performance criteria used in rubrics for online discussions. For example, which behaviours and performances do instructors focus on, eg, problem solving or critical thinking? We also sought to identify the range, type and percentage of ratings used in the rubrics and to categorise them.

Methods

We used four sets of search terms in Google™ and Google Scholar™ to locate rubrics. The first search term was simply rubrics. The second set used the following keywords or phrases within quotation marks: asynchronous discussions, online discussions, discussion boards, CMC, computer-mediated communication, discussion forums and discussion fora. The third set paired each of the keywords rubrics, scoring guides, evaluate, assess, evaluation guide and post-secondary with the keywords used in the second set. The fourth set was as follows: discussion rubric, discussion board rubric, asynchronous discussion rubric and online discussion rubric.
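To make the combinations concrete, the sketch below enumerates the query strings the four sets of search terms would produce. It is a minimal reconstruction under stated assumptions: the paper does not give the exact query syntax used, so the quoting and pairwise pairing shown here are illustrative.

```python
# Illustrative sketch of the four sets of search terms.
# The pairing logic for set 3 is an assumption; the paper does not
# specify the exact query syntax used in Google and Google Scholar.

topic_phrases = [
    "asynchronous discussions", "online discussions", "discussion boards",
    "CMC", "computer-mediated communication", "discussion forums",
    "discussion fora",
]
evaluation_terms = [
    "rubrics", "scoring guides", "evaluate", "assess",
    "evaluation guide", "post-secondary",
]

set_1 = ["rubrics"]                                    # single term
set_2 = [f'"{p}"' for p in topic_phrases]              # quoted phrases
set_3 = [f'"{p}" {t}' for p in topic_phrases           # pairwise combinations
         for t in evaluation_terms]
set_4 = ["discussion rubric", "discussion board rubric",
         "asynchronous discussion rubric", "online discussion rubric"]

for query in set_1 + set_2 + set_3 + set_4:
    print(query)
```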
We initially examined the rubrics to determine which statements were performance criteria and which were ratings. In some rubrics, row or column labels such as category or criteria explicitly identify performance criteria. However, not all rubrics use descriptive labels. In those cases, we identified performance criteria by reading each statement to determine whether it was a performance criterion or a rating. For example, the statement ‘Number of posts’ qualifies as a performance criterion because it describes a specific dimension of student work assessed by the ratings. We coded statements that described a performance or activity as performance criteria, and statements that assessed a performance or activity as ratings.

We assigned criteria found in the rubrics to categories based on patterns or recurring keywords, a process Miles and Huberman (1994) referred to as descriptive coding. The next stage of analysis consisted of grouping performance criteria categories that described similar types of performances or tasks, and grouping ratings’ categories that assessed similar performances or tasks. This process of interpretively (Miles & Huberman) amalgamating descriptive criteria and ratings’ categories continued throughout this stage of coding. In the final stage of coding, we examined the criteria and ratings’ categories to determine whether any of them could be associated with any other. This inferential and explanatory process (Miles & Huberman) led to the assignment of the categories into a smaller number of core categories, each representing a single theme.
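As an illustration of the descriptive-coding step described above, the sketch below assigns rubric statements to categories by recurring keywords and reports each category’s share, in the spirit of the percentages reported in Tables 1–8. The keyword lists, category names and sample statements are hypothetical fragments chosen for illustration; they are not the study’s actual coding scheme.

```python
from collections import Counter

# Hypothetical keyword-to-category map; illustrative fragments only,
# not the coding scheme used in the study.
CATEGORY_KEYWORDS = {
    "Thinking and reflection": ["think", "reflect", "reason"],
    "Grammar, spelling and punctuation": ["grammar", "spelling", "punctuation"],
    "Timing, frequency and initiative": ["timely", "frequency", "deadline"],
    "Response and reply": ["respond", "reply"],
}

def code_statement(statement: str) -> str:
    """Assign a statement to the first category whose keywords it contains."""
    text = statement.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return category
    return "Miscellaneous"

statements = [
    "Postings reflect critical thinking about the readings",
    "Messages are free of grammar and spelling errors",
    "Posts are made before the weekly deadline",
    "Student replies to at least two classmates",
]

# Tally categories and express each as a percentage of coded statements.
counts = Counter(code_statement(s) for s in statements)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {100 * n / total:.1f}% of category")
```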
Findings

From the 50 rubrics reviewed for this study, we identified 153 performance criteria and 831 ratings (Tables 1–8). We organised them into categories based on keyword analysis, then amalgamated them into 19 performance criteria categories and 40 ratings’ categories, each describing similar types of performances or tasks. We subsequently analysed these categories for patterns to identify core categories (Miles & Huberman, 1994; Strauss & Corbin, 1990) as follows: cognitive (44.0%), mechanical (19.0%), procedural/managerial (18.29%) and interactive (17.17%). Another 1.52% of ratings and performance criteria were coded as vague and not assigned to any core category.

Table 1: Performance criteria categories assigned to the cognitive core category (% of category)
  • Other: 2.8
  • Thinking and reflection: 2.8
  • Analysis, evaluation, interpretation, application and synthesis: 1.6
  • Quality and relevance: 1.6
  • Arguments: 1.2
  • Ideas, insights, connections and links: 1.2
  • Content: 0.9
  • Feedback, incorporation, interweave and integration: 0.5
  • References and support: 0.2

Table 2: Ratings’ categories assigned to the cognitive core category (% of category)
  • Thinking, reflection and reasoning: 12.1
  • Understand, comprehend and grasp: 7.7
  • Analysis, evaluation, summarisation and synthesis: 7.4
  • Content and information: 6.7
  • Support: 6.0
  • Connections and links: 5.6
  • Original, creative, novel and new: 5.1
  • Relevance and relationship: 4.6
  • Response, reply and answer (discussion): 4.4
  • Application, explanation and interpretation: 4.2
  • Miscellaneous: 4.2
  • Evidence and argument: 3.7
  • Opinions and insights: 3.0
  • Ideas: 2.8
  • Citations and references: 2.6
  • Questions, problems and solutions: 2.3
  • Concepts: 1.6
  • Examples and sources: 0.9
  • Weave, integrate and incorporate: 0.9
  • Clarification, clarity and clear: 0.7
  • Contribute and post: 0.2
  • Feedback: 0.2
  • Read and reading: 0.2

Table 3: Performance criteria categories assigned to the mechanical core category (% of category)
  • Writing and style: 7.9
  • Expression, delivery, mechanics and organisation: 4.2
  • References and support: 3.1
  • Language and grammar: 2.6

Table 4: Ratings’ categories assigned to the mechanical core category (% of category)
  • Grammar, spelling and punctuation: 24.6
  • Citations and references: 10.5
  • Mechanics, organisation, structure and expression: 9.4
  • Language, sentence, paragraph, word and vocabulary: 8.4
  • Writing, composition and style: 6.8
  • Examples and sources: 5.8
  • Opinions and insights: 3.7
  • Clarification, clarity and clear: 3.1
  • Response, reply and answer (discussion): 2.6
  • Miscellaneous: 1.6
  • Resources: 1.6
  • Read and reading: 1.0
  • Support: 1.0
  • Understand, comprehend and grasp: 1.0
  • Content and information: 0.5
  • Relevance and relationship: 0.5

Table 5: Performance criteria categories assigned to the procedural/managerial core category (% of category)
  • Timing, frequency and initiative: 6.1
  • Participation: 3.3
  • Best practices, etiquette and protocols: 2.2
  • Expression, delivery, mechanics and organisation: 1.1
  • Other: 1.1
  • Quality and relevance: 1.1
  • Content: 0.6
  • Length: 0.6

Table 6: Ratings’ categories assigned to the procedural/managerial core category (% of category)
  • Time, initiative and prompting: 13.3
  • Hour, day, minute, date, deadline and late: 11.6
  • Participation: 11.0
  • Number: 9.4
  • Etiquette and protocols: 7.2
  • Frequently, regularly, freely, occasionally, rarely and sporadically: 7.2
  • Quality, value, valid and good: 5.5
  • Contribute and post: 3.9
  • Miscellaneous: 3.3
  • Read and reading: 3.3
  • Respect, offensive and abusive: 3.3
  • Response, reply and answer (discussion): 2.8
  • Opinions and insights: 1.1
  • Ideas: 0.6
  • Language, sentence, paragraph, word and vocabulary: 0.6

Table 7: Performance criteria categories assigned to the interactive core category (% of category)
  • Response and reply: 6.6
  • Other: 3.0
  • Feedback, incorporation, interweave and integration: 1.2
  • Interaction: 1.2
  • References and support: 1.2
  • Ideas, insights, connections and links: 0.6

Discussion of the findings

Journal title:
  • BJET (British Journal of Educational Technology)

Volume 40, Issue 5

Pages 804–820

Publication date: 2009